Behavioral Data Analysis with R and Python by Florent Buisson

Author: Florent Buisson
Language: eng
Format: epub
ISBN: 9781492061373
Publisher: O'Reilly Media, Inc.
Published: 2021-06-22T00:00:00+00:00


I am no big fan of using an arbitrary number purely because it’s conventional, and you should feel free to adjust the “80% power” convention to fit your needs. Using a power of 80% for your relevant threshold effect size would mean that if the intervention had exactly that effect size, you would have, on average, a 20% chance of not implementing the intervention because you wrongly got a negative result. For big and costly interventions that are hard to test, my opinion is that this is too low, and I would personally target 90% power. On the other hand, the higher the power you want, the larger your sample size will need to be. You may not want to spend half a year becoming absolutely certain of the value of the 1-click button if, in that time, your competitor has completely revamped their website twice and is eating your lunch.
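To make that trade-off concrete, here is a minimal sketch in Python using statsmodels’ tt_ind_solve_power; the threshold effect size of 0.1 (in standardized Cohen’s d units) is purely a hypothetical placeholder. It shows how the required sample size per group grows when you raise the target power from 80% to 90%:

```python
# A minimal sketch: required sample size per group at 80% vs. 90% power,
# assuming a two-sided test and a hypothetical threshold effect size of 0.1.
from statsmodels.stats.power import tt_ind_solve_power

effect_size = 0.1   # hypothetical smallest effect worth implementing (Cohen's d)
alpha = 0.05        # conventional 5% significance level

for power in (0.80, 0.90):
    # With nobs1 left unspecified, the solver returns the sample size per group
    n_per_group = tt_ind_solve_power(effect_size=effect_size,
                                     alpha=alpha,
                                     power=power,
                                     ratio=1.0,
                                     alternative='two-sided')
    print(f"power = {power:.0%}: about {n_per_group:,.0f} subjects per group")
```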

In my personal experience, one key but often ignored consideration for power analysis and sample size determination in the real world is organizational testing velocity: how many experiments can you run in a year? In many companies, that number is constrained by someone’s time (either the analyst’s or the business partner’s), by the company’s planning cycle, by budget limits, etc., but not by the number of customers available. If you can realistically hope to plan, test, and implement only one intervention per year, do you really want to run a three-month experiment and then do nothing for the rest of the year? On the other hand, if you can run one experiment a week, do you really want to spend three months confirming a positive but mediocre impact instead of taking 12 chances at a big one? Therefore, after doing the math, you should always sanity-check your experiment duration against your testing velocity and adjust it appropriately.
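As a rough illustration of that sanity check, the sketch below uses purely hypothetical numbers: a required sample size taken from a power calculation like the one above, and the weekly traffic you can realistically allocate to the test. It simply converts the sample size into an expected duration:

```python
# A minimal sketch of the duration sanity check, with hypothetical inputs.
required_n_per_group = 2102   # e.g., the output of the 90%-power calculation above
n_groups = 2                  # control + treatment
weekly_visitors = 500         # hypothetical traffic available for the experiment

weeks_needed = required_n_per_group * n_groups / weekly_visitors
print(f"Estimated experiment duration: {weeks_needed:.1f} weeks")
# If that duration eats up most of your yearly testing capacity, revisit the
# target power, the threshold effect size, or the experiment design.
```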

Regarding statistical significance, the conventional approach introduces an asymmetry between the control and the treatment with a statistical significance threshold of 95%. The bar of evidence the treatment has to pass to get implemented is much higher than for the control, which is implemented by default. Let’s say that you’re setting up a new marketing email campaign and you have two options to test. Why should one version be given the benefit of the doubt over the other? On the other hand, if you have a campaign that has been running for years and for which you have run hundreds of tests, the current version is probably extremely good and a 5% chance of wrongly abandoning it might be too high; the right threshold here might be 99% instead of 95%. More broadly, relying on a conventional value that is the same for all experiments feels to me like a missed opportunity to reflect on the respective costs of false positives and false negatives. In the case of the 1-click button, which is easily reversible and has minimal costs of implementation, I would target a statistical significance threshold of 90% as well.
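Here is a minimal sketch, reusing the hypothetical effect size from above, of how loosening or tightening the significance threshold changes the required sample size per group at a fixed 80% power:

```python
# A minimal sketch: sample size per group at 90%, 95%, and 99% significance
# thresholds, assuming the same hypothetical threshold effect size of 0.1.
from statsmodels.stats.power import tt_ind_solve_power

effect_size = 0.1  # hypothetical threshold effect size (Cohen's d)

for alpha in (0.10, 0.05, 0.01):   # 90%, 95%, and 99% significance thresholds
    n_per_group = tt_ind_solve_power(effect_size=effect_size,
                                     alpha=alpha,
                                     power=0.80,
                                     alternative='two-sided')
    print(f"alpha = {alpha:.2f}: about {n_per_group:,.0f} subjects per group")
```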


